Status Report of the DPHEP Study Group: Towards a Global Effort for Sustainable Data Preservation in High Energy Physics
Data from high-energy physics (HEP) experiments are collected with
significant financial and human effort and are mostly unique. An
inter-experimental study group on HEP data preservation and long-term analysis
was convened as a panel of the International Committee for Future Accelerators
(ICFA). The group was formed by large collider-based experiments and
investigated the technical and organisational aspects of HEP data preservation.
An intermediate report was released in November 2009 addressing the general
issues of data preservation in HEP. This paper includes and extends the
intermediate report. It provides an analysis of the research case for data
preservation and a detailed description of the various projects at experiment,
laboratory and international levels. In addition, the paper provides a concrete
proposal for an international organisation in charge of the data management and
policies in high-energy physics.
A Roadmap for HEP Software and Computing R&D for the 2020s
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.
Report on the Workshop on Sustainable Software Sustainability 2019 (WOSSS19)
This report is based on the discussions and presentations that took place at the Workshop on Sustainable Software Sustainability (WOSSS19, www.software.ac.uk/wosss19) in April 2019. It captures the state of the art for a range of software sustainability themes brought up by the organisers and attendees of the workshop.
25th International Conference on Computing in High Energy & Nuclear Physics
Computing for the Large Hadron Collider (LHC) at CERN arguably started shortly after the commencement of data taking at the previous machine, LEP; some would argue it was even before. Without specifying an exact date, it was certainly prior to when today's large(st) collaborations, namely ATLAS and CMS, had formed and been approved, and before the LHC itself was given the official go-ahead at the 100th meeting of the CERN Council in 1995. Approximately the first decade was spent on research and development; the second, from the beginning of the new millennium, on grid exploration and hardening; and the third on providing support to LHC data taking, production, analysis and, most importantly, obtaining results.
Collaborative Long-Term Data Preservation: From Hundreds of PB to Tens of EB
In 2012, the Study Group on Data Preservation in High Energy Physics (HEP) for Long-Term Analysis, more commonly known as DPHEP, published a Blueprint report detailing the motivation for, problems with, and status of data preservation across all of the main HEP laboratories worldwide. In September of that year, an open workshop was held in Krakow to prepare an update to the European Strategy for Particle Physics (ESPP), which was formally adopted by a special session of CERN's Council in May 2013 in Brussels; key elements from the Blueprint were input to that discussion. A new update round to the ESPP has recently been launched, so it is timely to review the progress made since 2012/2013 and to list the outstanding problems and possible future directions for this work.
Grid today, clouds on the horizon
By the time of CCP 2008, the largest scientific machine in the world, the Large Hadron Collider, had been cooled down as scheduled to its operational temperature of below 2 kelvin, and injection tests were starting. Collisions of proton beams at 5+5 TeV were expected within one to two months of the initial tests, with data taking at design energy (7+7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our "Higgs in one basket": that of Grid computing [The Worldwide LHC Computing Grid (WLCG), in: Proceedings of the Conference on Computational Physics 2006 (CCP 2006), vol. 177, 2007, pp. 219-223]. After many years of preparation, 2008 saw a final "Common Computing Readiness Challenge" (CCRC'08), aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change, as always, is on the horizon. The current funding model for Grids, which in Europe has been through three generations of EGEE projects, together with related projects in other parts of the world, including South America, is evolving towards a long-term, sustainable e-infrastructure, such as the European Grid Initiative (EGI) [The European Grid Initiative Design Study, website at http://web.eu-egi.eu/]. At the same time, potentially new paradigms, such as "Cloud Computing", are emerging. This paper summarizes the results of CCRC'08 and discusses the potential impact of future Grid funding on both regional and international application communities. It contrasts Grid and Cloud computing models from both technical and sociological points of view. Finally, it discusses the requirements of production application communities in terms of stability and continuity in the medium to long term.